Reviews: What the Vec? Towards Probabilistically Grounded Embeddings
This paper's perspective is novel and relatively solid. It offers a way to understand semantic similarity in word embeddings by (1) showing, through the geometry of the embedding space, that different kinds of semantic composition can be captured by PMI vectors, and (2) showing that the linear projection between PMI vectors and word embeddings preserves the properties in (1). To me, the best part of the paper is that the authors make an effort to give a systematic and mathematically well-formed analysis of semantic issues in word embeddings that are frequently mentioned but not fully understood. The paper also derives a new model with a least-squares (LSQ) loss in Section 5, which achieves better performance and thus justifies the preceding analysis to some extent. My biggest concern lies in the absence of any account of cosine similarity.
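To make the objects in points (1) and (2) concrete, here is a minimal numpy sketch (not the authors' code) of how PMI vectors are formed from a word-context co-occurrence matrix and how a least-squares linear map from PMI vectors to a given set of embeddings could be fitted. The co-occurrence counts and embeddings below are random placeholders, and the snippet only illustrates the linear PMI-to-embedding relationship referred to in (2), not the paper's actual Section 5 model.

```python
import numpy as np

# Toy sketch, not the paper's code: build PMI vectors from a word-context
# co-occurrence count matrix C, then fit a least-squares linear map from the
# PMI vectors to a given set of word embeddings E (cf. point (2) above).
rng = np.random.default_rng(0)
V, d = 500, 50                                    # hypothetical vocab size and embedding dim
C = rng.poisson(2.0, size=(V, V)).astype(float)   # placeholder co-occurrence counts

total = C.sum()
p_w = C.sum(axis=1, keepdims=True) / total    # marginal word probabilities p(w)
p_c = C.sum(axis=0, keepdims=True) / total    # marginal context probabilities p(c)
p_wc = C / total                              # joint probabilities p(w, c)

with np.errstate(divide="ignore"):
    pmi = np.log(p_wc / (p_w * p_c))          # PMI(w, c) = log p(w,c) / (p(w) p(c))
pmi[np.isneginf(pmi)] = 0.0                   # common convention: zero out unseen pairs

E = rng.normal(size=(V, d))                   # placeholder word embeddings
# Least-squares fit of M minimising ||PMI @ M - E||_F, i.e. a linear projection
# from PMI vectors (rows of `pmi`) onto the embedding space.
M, *_ = np.linalg.lstsq(pmi, E, rcond=None)
print("mean reconstruction error:", np.linalg.norm(pmi @ M - E) / V)
```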
Towards a Theoretical Understanding of Word and Relation Representation
Representing words by vectors, or embeddings, enables computational reasoning and is foundational to automating natural language tasks. For example, if the embeddings of similar words contain similar values, word similarity can be assessed directly, whereas judging it from spelling is often impossible (e.g. cat/feline), and predetermining and storing the similarities between all word pairs is prohibitively time-consuming, memory-intensive and subjective. We focus on word embeddings learned from text corpora and from knowledge graphs.

Several well-known algorithms, e.g. word2vec and GloVe, learn word embeddings from text in an unsupervised manner by learning to predict the words that occur around each word. The parameters of such embeddings are known to reflect word co-occurrence statistics, but how they capture semantic meaning has been unclear. Knowledge graph representation models learn representations of both entities (words, people, places, etc.) and the relations between them, typically by training a model to predict known facts in a supervised manner. Despite steady improvements in fact-prediction accuracy, little is understood of the latent structure that enables this.

The limited understanding of how latent semantic structure is encoded in the geometry of word embeddings and knowledge graph representations leaves unclear any principled means of improving their performance, reliability or interpretability. To address this:

1. we theoretically justify the empirical observation that particular geometric relationships between word embeddings learned by algorithms such as word2vec and GloVe correspond to semantic relations between words (illustrated in the sketch below); and
2. we extend this correspondence between semantics and geometry to the entities and relations of knowledge graphs, providing a model for the latent structure of knowledge graph representations that is linked to that of word embeddings.
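As a concrete instance of contribution 1, the sketch below shows the familiar vector-offset regularity in which an analogy is recovered by adding and subtracting embeddings and ranking candidates by cosine similarity. The vectors are hypothetical toy values chosen for illustration, not real word2vec/GloVe output.

```python
import numpy as np

# Illustration only: the kind of geometric regularity studied in contribution 1,
# where a semantic relation shows up as a vector offset between word embeddings.
# The vectors below are hypothetical placeholders, not real word2vec/GloVe output.
emb = {
    "king":  np.array([0.80, 0.60, 0.10]),
    "queen": np.array([0.78, 0.58, 0.92]),
    "man":   np.array([0.75, 0.55, 0.05]),
    "woman": np.array([0.73, 0.53, 0.90]),
    "apple": np.array([0.10, 0.90, 0.20]),   # unrelated distractor
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Analogy by vector offset: king - man + woman should land nearest to queen.
target = emb["king"] - emb["man"] + emb["woman"]
candidates = [w for w in emb if w not in {"king", "man", "woman"}]
best = max(candidates, key=lambda w: cosine(emb[w], target))
print(best, round(cosine(emb[best], target), 3))   # expected: queen, cosine ~1.0
```

Contribution 2 concerns the analogous geometric question for knowledge graphs, where a model scores a candidate fact from the representations of its entities and relation.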